


Examples

  • DQN with PrioritizedReplayBuffer
  • Loss Adjusted Prioritization (LAP)
  • Multiprocess Learning (Ape-X) with MPPrioritizedReplayBuffer
  • Use MPReplayBuffer with Ray
  • Large Batch Experience Replay (LaBER)
  • Hindsight Experience Replay (HER)
  • Create ReplayBuffer for non-simple gym.Env with helper functions
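Several of the examples above build on prioritized experience replay. As a conceptual primer before reading them, here is a minimal sketch of proportional prioritized sampling in plain NumPy. This is an illustration of the technique only, not cpprb's implementation (cpprb implements it in C++ with a segment tree); the class and parameter names are hypothetical.

```python
import numpy as np

class TinyPrioritizedBuffer:
    """Minimal proportional prioritized replay (conceptual sketch, not cpprb's implementation)."""

    def __init__(self, size, alpha=0.6):
        self.size = size
        self.alpha = alpha                               # how strongly priorities skew sampling
        self.obs = np.zeros(size, dtype=np.float64)      # toy 1-D observations
        self.priorities = np.zeros(size, dtype=np.float64)
        self.next_idx = 0
        self.stored = 0

    def add(self, obs):
        # New transitions get the current maximum priority so they are sampled at least once.
        self.obs[self.next_idx] = obs
        self.priorities[self.next_idx] = self.priorities.max() if self.stored else 1.0
        self.next_idx = (self.next_idx + 1) % self.size  # overwrite oldest when full
        self.stored = min(self.stored + 1, self.size)

    def sample(self, batch_size, beta=0.4):
        # Sample indexes with probability proportional to priority**alpha.
        p = self.priorities[:self.stored] ** self.alpha
        p /= p.sum()
        indexes = np.random.choice(self.stored, batch_size, p=p)
        # Importance-sampling weights correct the bias of non-uniform sampling.
        weights = (self.stored * p[indexes]) ** (-beta)
        weights /= weights.max()
        return {"obs": self.obs[indexes], "weights": weights, "indexes": indexes}

    def update_priorities(self, indexes, td_errors, eps=1e-6):
        # After a learning step, priorities track the magnitude of the TD error.
        self.priorities[indexes] = np.abs(td_errors) + eps
```

The add / sample / update-priorities cycle shown here is the same loop the "DQN with PrioritizedReplayBuffer" example walks through with cpprb's real (and much faster) `PrioritizedReplayBuffer`.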